
    Ontological models for quantum theory as functors

    We interpret ontological models for finite-dimensional quantum theory as functors from the category of finite-dimensional Hilbert spaces and bounded linear maps to the category of measurable spaces and Markov kernels. This uniformises several earlier results, which we analyse more closely: Pusey, Barrett, and Rudolph's result rules out monoidal functors; Leifer and Maroney's result rules out functors that preserve a duality between states and measurements; Aaronson et al.'s result rules out functors that adhere to the Schrödinger equation. We also prove that it is possible to have epistemic functors that take values in signed Markov kernels.
    Comment: In Proceedings QPL 2019, arXiv:2004.1475
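The target category's composition can be made concrete: over finite ontic spaces, a Markov kernel is just a column-stochastic matrix, and kernel composition (Chapman-Kolmogorov) is matrix multiplication, which is exactly the composition a functor into this category must preserve. A minimal sketch of this composition rule, with illustrative ontic-space sizes and matrix entries not taken from the paper:

```python
# A Markov kernel between finite measurable spaces is a stochastic matrix:
# K[j][i] is the probability of output j given input i, so each column
# sums to 1.  Kernel composition is ordinary matrix multiplication.

def is_stochastic(K):
    cols = range(len(K[0]))
    nonneg = all(x >= 0 for row in K for x in row)
    return nonneg and all(
        abs(sum(K[j][i] for j in range(len(K))) - 1.0) < 1e-9 for i in cols)

def compose(K2, K1):
    # (K2 . K1)[j][i] = sum_k K2[j][k] * K1[k][i]
    return [[sum(K2[j][k] * K1[k][i] for k in range(len(K1)))
             for i in range(len(K1[0]))]
            for j in range(len(K2))]

# Illustrative kernels on ontic spaces of sizes 3 -> 2 -> 2.
K1 = [[0.5, 0.2, 0.0],
      [0.5, 0.8, 1.0]]
K2 = [[0.9, 0.1],
      [0.1, 0.9]]

K = compose(K2, K1)
# Stochasticity is preserved under composition, so functoriality
# Lambda(g . f) = Lambda(g) . Lambda(f) is well-typed.
assert is_stochastic(K1) and is_stochastic(K2) and is_stochastic(K)
```

Signed kernels, as used for the epistemic functors in the paper, simply drop the non-negativity requirement while keeping column sums equal to 1.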

    Computationally-Secure and Composable Remote State Preparation

    We introduce a protocol between a classical polynomial-time verifier and a quantum polynomial-time prover that allows the verifier to securely delegate to the prover the preparation of certain single-qubit quantum states. The prover is unaware of which state he received and, moreover, the verifier can check with high confidence whether the preparation was successful. The delegated preparation of single-qubit states is an elementary building block in many quantum cryptographic protocols. We expect our implementation of "random remote state preparation with verification", a functionality first defined in (Dunjko and Kashefi 2014), to be useful for removing the need for quantum communication in such protocols while keeping their functionality. The main application that we detail is to a protocol for blind and verifiable delegated quantum computation (DQC) that builds on the work of (Fitzsimons and Kashefi 2018), who provided such a protocol with quantum communication. Recently, both blind and verifiable DQC were shown to be possible, under computational assumptions, with a classical polynomial-time client (Mahadev 2017, Mahadev 2018). Compared to the work of Mahadev, our protocol is more modular, applies to the measurement-based model of computation (instead of the Hamiltonian model), and is composable. Our proof of security builds on ideas introduced in (Brakerski et al. 2018).
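The ideal functionality being implemented can be pictured as follows: the verifier ends up with a private classical angle theta, while the prover ends up holding the state (|0> + e^{i*theta}|1>)/sqrt(2) without learning theta. A toy simulation of this ideal functionality, assuming the eight-angle state family common in this literature; the actual protocol achieves this with purely classical communication under computational assumptions:

```python
import cmath
import math
import random

# Ideal "random remote state preparation" functionality (toy model):
# the verifier learns a random angle theta; the prover holds the qubit
# |+_theta> = (|0> + e^{i*theta}|1>)/sqrt(2), which on its own reveals
# nothing about theta (averaged over theta the state is maximally mixed).
ANGLES = [k * math.pi / 4 for k in range(8)]  # illustrative angle set

def ideal_rsp(rng=random):
    theta = rng.choice(ANGLES)                  # verifier's private output
    amp = 1 / math.sqrt(2)
    qubit = (amp, amp * cmath.exp(1j * theta))  # prover's output state
    return theta, qubit

theta, qubit = ideal_rsp()
# The output state is normalised for every theta.
norm = abs(qubit[0]) ** 2 + abs(qubit[1]) ** 2
assert abs(norm - 1.0) < 1e-9
```

The verification component of the protocol then lets the verifier test, with high confidence, that the prover really holds the state corresponding to theta.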


    Complexity-Theoretic Limitations on Blind Delegated Quantum Computation

    Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know that blind delegation protocols can achieve information-theoretic security. In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is in a number of ways constrained. In the first part of our paper we provide some indication that ITS-BQC protocols for delegating $\mathsf{BQP}$ computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol with $O(n^d)$ bits of classical communication implies that $\mathsf{BQP} \subset \mathsf{MA}/O(n^d)$. We conjecture that this containment is unlikely by providing an oracle relative to which $\mathsf{BQP} \not\subset \mathsf{MA}/O(n^d)$. We then show that if an ITS-BQC protocol exists with polynomial classical communication and which allows the client to delegate quantum sampling problems, then there exist non-uniform circuits of size $2^{n - \Omega(n/\log(n))}$, making polynomially-sized queries to an $\mathsf{NP^{NP}}$ oracle, for computing the permanent of an $n \times n$ matrix. The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol, namely $\mathsf{QCMA/qpoly} \cap \mathsf{coQCMA/qpoly}$. Then, we show that having such a protocol for delegating $\mathsf{NP}$-hard functions implies $\mathsf{coNP^{NP^{NP}}} \subseteq \mathsf{NP^{NP^{PromiseQMA}}}$.
    Comment: Improves upon, supersedes, and corrects our earlier submission, which contained an error in one of the main theorems.

    Robust verification of quantum computation

    Quantum computers promise to offer a considerable speed-up in solving certain problems, compared to the best classical algorithms. In many instances, the gap between quantum and classical running times is conjectured to be exponential. While this is great news for those applications where quantum computers would provide such an advantage, it also raises a significant challenge: how can classical computers verify the correctness of quantum computations? In attempting to answer this question, a number of protocols have been developed in which a classical client (referred to as verifier) can interact with one or more quantum servers (referred to as provers) in order to certify the correctness of a quantum computation performed by the server(s). These protocols are of one of two types: either there are multiple non-communicating provers, sharing entanglement, and the verifier is completely classical; or, there is a single prover and the classical verifier has a device for preparing or measuring quantum states. The latter type of protocols are, arguably, more relevant to near term quantum computers, since having multiple quantum computers that share a large amount of entanglement is, from a technological standpoint, extremely challenging. Before the realisation of practical single-prover protocols, a number of challenges need to be addressed: how robust are these protocols to noise on the verifier's device? Can the protocols be made fault-tolerant without significantly increasing the requirements of the verifier? How do we know that the verifier's device is operating correctly? Could this device be eliminated completely, thus having a protocol with a fully classical verifier and a single quantum prover? Our work attempts to provide answers to these questions. 
First, we consider a single-prover verification protocol developed by Fitzsimons and Kashefi and show that this protocol is indeed robust with respect to deviations on the quantum state prepared by the verifier. We show that this is true even if those deviations are the result of a correlation with the prover's system. We then use this result to give a verification protocol which is device-independent. The protocol consists of a verifier with a measurement device and a single prover. Device-independence means that the verifier need not trust the measurement device (nor the prover), which can be assumed to be fully malicious (though not communicating with the prover). A key element in realising this protocol is a robust technique of Reichardt, Unger and Vazirani for testing, using non-local correlations, that two untrusted devices share a large number of entangled states. This technique is referred to as rigidity of non-local correlations. Our second result is to prove a rigidity result for a type of quantum correlations known as steering correlations. To do this, we first show that steering correlations can be used in order to certify maximally entangled states, in a setting in which each test is independent of the previous one. We also show that the fidelity with which we characterise the state, in this specific test, is optimal. We then improve the previous result by removing the independence assumption. This then leads to our desired rigidity result. We make use of it, in a similar fashion to the device-independent case, in order to give a verification protocol that is one-sided device-independent. The importance of this application is to show how different trust assumptions affect the efficiency of the protocol. Next, we describe a protocol for fault-tolerantly verifying quantum computations, with minimal "quantum requirements" for the verifier. Specifically, the verifier only requires a device for measuring single-qubit states. 
Both this device and the prover's operations are assumed to be prone to errors. We show that under standard assumptions about the error model, it is possible to achieve verification of quantum computation using fault-tolerant principles. As a proof of principle, and to better illustrate the inner workings of the protocol, we describe a toy implementation of the protocol in a quantum simulator, and present the results we obtained when running it for a small computation. Finally, we explore the possibility of having a verification protocol, with a classical verifier and a single prover, such that the prover is blind with respect to the verifier's computation. We give evidence that this is not possible. In fact, our result is only concerned with blind quantum computation with a classical client, and uses complexity-theoretic results to argue why it is improbable for such a protocol to exist. We then use these complexity-theoretic techniques to show that a client, with the ability to prepare and send quantum states to a quantum server, would not be able to delegate arbitrary NP problems to that server. In other words, even a client with quantum capabilities cannot exploit those capabilities to delegate the computation of NP problems while keeping the input to that computation private. This is again true, provided certain complexity-theoretic conjectures hold.

    Management of a flare up case after endodontic treatment procedure

    A flare-up is defined as pain and/or swelling of the soft tissues that occurs within a few hours or a few days following root canal treatment. In some cases, flare-ups can appear after the completion of the root canal treatment, due to the penetration or development of microorganisms into the root canal. The pain felt by the patient depends on the extent of the periradicular tissue injury, its severity, and the intensity of the inflammatory immune response. The article discusses the microbial irritation of apical periodontal tissue caused by insufficient instrumentation and filling of the root canals, factors that lead to failure of root canal treatment.

    Treatment of acromegaly in Romania. How close are we to achieving disease control?

    Introduction: In Romania, no nationwide data for acromegaly treatment and control rate are available. Our objective was to assess the acromegaly control rate in a tertiary referral centre, which covers an important part of Romanian territory and population of patients with acromegaly. Materials and methods: We reviewed the records of all 164 patients (49 males and 115 females; median age 55 [47, 63.5] years) with newly or previously diagnosed acromegaly, who have been assessed at least once in our tertiary referral centre between January 1, 2012 and March 31, 2016. This sample represents 13.6% of the total expected 1200 Romanian patients with acromegaly and covers 82.9% of the counties in Romania. Control of acromegaly was defined as a random serum growth hormone (GH) < 1 ng/mL and an age-normalised serum insulin-like growth factor-I (IGF-I) value. The GH and IGF-I values used for calculation of the control rate were those at the last evaluation. The same assays for GH and IGF-I measurement were used in all patients. Results: There were 147 treated and 17 untreated patients. Of the 147 patients assessed after therapy, 137 (93.2%) had pituitary surgery, 116 (78.9%) were on medical treatment at the last evaluation, and 67 (45.5%) had radiotherapy. Seventy-one (48.3%) had a random GH < 1 ng/mL, 54 (36.7%) had a normalised, age-adjusted IGF-I, and 42 (28.6%) had both normal random serum GH and IGF-I. Conclusions: In Romania, acromegaly benefits from the whole spectrum of therapeutic interventions. However, the control rate remains disappointing.
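The reported control rates follow directly from the counts of treated patients given in the abstract; a quick arithmetic check (all numbers taken from the abstract):

```python
# Counts reported in the abstract (147 treated patients assessed).
treated = 147
gh_controlled = 71   # random GH < 1 ng/mL
igf1_normal = 54     # age-adjusted IGF-I normalised
both_normal = 42     # both criteria met (the headline control rate)

def rate(count, total=treated):
    # Percentage rounded to one decimal, as in the abstract.
    return round(100 * count / total, 1)

assert rate(gh_controlled) == 48.3
assert rate(igf1_normal) == 36.7
assert rate(both_normal) == 28.6
```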

    Robustness and device independence of verifiable blind quantum computing

    Recent advances in theoretical and experimental quantum computing bring us closer to scalable quantum computing devices. This makes the need for protocols that verify the correct functionality of quantum operations timely, and has led to the field of quantum verification. In this paper we address key challenges to make quantum verification protocols applicable to experimental implementations. We prove the robustness of the single-server verifiable universal blind quantum computing protocol of Fitzsimons and Kashefi (2012) in the most general scenario. This includes the case where the purification of the deviated input state is in the hands of an adversarial server. The proved robustness property allows the composition of this protocol with a device-independent state tomography protocol that we give, which is based on the rigidity of CHSH games as proposed by Reichardt, Unger and Vazirani (2013). The resulting composite protocol has lower round complexity for the verification of entangled quantum servers with a classical verifier and, as we show, can be made fault tolerant.
    Comment: Shortly before uploading the first version to the arXiv, the authors became aware of parallel and independent research by Hajdusek, Perez-Delgado and Fitzsimons, which also addresses device-independent verifiable blind quantum computing and appeared the same day on the arXiv.
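The rigidity technique underlying the tomography protocol rests on the CHSH game: classical strategies win with probability at most 3/4, while measurements on a shared maximally entangled state win with probability cos^2(pi/8) ~ 0.854, and any strategy approaching that value must be close to the ideal one (the rigidity statement). A small sketch of the two bounds; the brute-force strategy enumeration is illustrative, not part of the paper:

```python
import itertools
import math

# CHSH game: a referee sends bits x, y to two non-communicating players,
# who reply with bits a, b; they win iff a XOR b == x AND y.

def classical_value():
    # A deterministic strategy is a pair of functions {0,1} -> {0,1},
    # encoded as (a0, a1) for Alice and (b0, b1) for Bob.  Shared
    # randomness cannot beat the best deterministic strategy.
    best = 0.0
    for a0, a1, b0, b1 in itertools.product([0, 1], repeat=4):
        wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
    return best

# Optimal quantum winning probability (Tsirelson's bound).
quantum_value = math.cos(math.pi / 8) ** 2

assert classical_value() == 0.75
assert quantum_value > 0.85
```

Observing a winning rate strictly above 3/4 thus certifies entanglement, and rates near cos^2(pi/8) certify (up to local isometry) the maximally entangled state and the ideal measurements, which is what the device-independent protocol exploits.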